Dynamic Programming

Dynamic Programming (DP) is a technique used to optimize recursive algorithms by storing intermediate results, avoiding redundant calculations, and thus significantly improving computational efficiency. It is particularly useful for problems that exhibit overlapping subproblems and optimal substructure.

  • Overlapping Subproblems: Problems can be broken down into smaller, overlapping subproblems that are solved multiple times.
  • Optimal Substructure: An optimal solution can be constructed efficiently from optimal solutions to its subproblems.
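To see why overlapping subproblems matter, consider a naive recursive Fibonacci function (Fibonacci is used here only as an illustrative example; it also appears among the classic DP problems at the end of this article). The same subproblems are recomputed many times, giving exponential running time:

```python
def fib(n):
    # Naive recursion: fib(n - 2) is recomputed inside fib(n - 1),
    # so the same subproblems are solved over and over (O(2^n) calls).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Dynamic programming removes this redundancy by computing each subproblem once and reusing the stored result.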

Key Concepts

  • Memoization: A top-down approach where results of subproblems are stored (cached) for future use to avoid redundant calculations. It is implemented with recursion and caching.
  • Tabulation: A bottom-up approach that solves subproblems iteratively, starting from the smallest subproblems and building up solutions to the larger problem.
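The two approaches can be sketched side by side with Fibonacci as the running example (the function names here are illustrative, not part of the original text):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Memoization (top-down): recursion plus a cache of subproblem results
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):
    # Tabulation (bottom-up): fill a table from the smallest subproblems upward
    if n < 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```

Both run in O(n) time; memoization mirrors the recursive definition directly, while tabulation avoids recursion depth limits.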

Advantages of Dynamic Programming

  • Efficiency: Reduces the time complexity significantly in many cases.
  • Simplicity: Provides a structured approach to solving complex problems.
  • Generalization: Solutions can be adapted to solve a range of similar problems.

Disadvantages of Dynamic Programming

  • Memory Usage: High memory consumption due to storage of all subproblem solutions.
  • Complexity: Implementing a dynamic programming solution can be complex and requires a deep understanding of the problem.

Optimal Substructure

Problem Description

  • Problem: Minimum Cost Climbing Stairs
    • You can climb 1 or 2 steps at a time.
    • Each step has a cost, represented by a non-negative integer.
    • Starting from the ground, determine the minimum cost to reach the top of the stairs.
  • Cost Example: For steps with costs 1, 10, and 1, the minimum cost to reach the 3rd step is 2 (step onto the 1st step, then jump two steps to the 3rd).

Dynamic Programming Approach

  • Let dp[i] be the minimum cumulative cost to reach the i-th step.
  • The cost to reach the i-th step is the minimum of the costs of reaching the two previous steps, plus the current step's cost: dp[i] = min(dp[i-1], dp[i-2]) + cost[i]
  • Optimal Substructure Property: The optimal solution to a problem can be constructed from optimal solutions of its subproblems.

State Transition and Initial States

  • Transition Equation: dp[i] = min(dp[i-1], dp[i-2]) + cost[i]
  • Initial States:
    • dp[1] = cost[1]
    • dp[2] = cost[2]

Code for Dynamic Programming

def min_cost_climbing_stairs_dp(cost):
    # Convention: cost[0] = 0 is the ground; cost[1..n] are the step costs
    n = len(cost) - 1
    if n in (1, 2):
        return cost[n]
    dp = [0] * (n + 1)  # dp[i]: minimum cumulative cost to reach step i
    dp[1], dp[2] = cost[1], cost[2]  # initial states
    for i in range(3, n + 1):  # state transition
        dp[i] = min(dp[i - 1], dp[i - 2]) + cost[i]
    return dp[n]
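The same recurrence can also be written top-down with memoization, which mirrors the transition equation directly. A sketch, assuming the same convention as above (cost[1..n] are the step costs, with a placeholder at index 0); the function name is illustrative:

```python
def min_cost_climbing_stairs_memo(cost):
    # Top-down counterpart: recursion with a cache of subproblem results.
    # Assumes cost[0] is a placeholder for the ground; the goal is step n.
    memo = {}

    def dfs(i):
        # Initial states: reaching step 1 or 2 costs exactly that step
        if i in (1, 2):
            return cost[i]
        if i not in memo:
            # State transition: dp[i] = min(dp[i-1], dp[i-2]) + cost[i]
            memo[i] = min(dfs(i - 1), dfs(i - 2)) + cost[i]
        return memo[i]

    return dfs(len(cost) - 1)
```

With costs 1, 10, 1 (as in the example above), this returns 2.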

Space Optimization

  • Compressing the space used by the DP array from O(n) to O(1):
def min_cost_climbing_stairs_dp_comp(cost):
    # Each state depends only on the two preceding states, so the
    # full DP array can be replaced by two rolling variables.
    n = len(cost) - 1
    if n in (1, 2):
        return cost[n]
    prev, curr = cost[1], cost[2]
    for i in range(3, n + 1):
        prev, curr = curr, min(prev, curr) + cost[i]
    return curr

Statelessness

Definition

  • Statelessness: A characteristic where the future development of a state depends only on the current state and is independent of how that state was reached.

Problems Illustrating Non-Statelessness

Stair Climbing with Constraints

  • Constraint: Cannot jump 1 step twice consecutively.
  • DP State Definition Expansion:
    • State [i, j] indicates being on the i-th step with the last jump of j steps.
    • Transition Equation:
      dp[i, 1] = dp[i-1, 2]
      dp[i, 2] = dp[i-2, 1] + dp[i-2, 2]
    • This problem illustrates that by expanding the state definition, we can still manage problems that initially seem to lack statelessness.
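The expanded state can be sketched as a small counting routine. This is a sketch under the assumption that dp[i][j] counts the number of valid ways to reach step i with a last jump of j steps, as the additive transition suggests; the function name is illustrative:

```python
def climbing_stairs_constraint(n):
    # dp[i][j]: number of ways to reach step i where the last jump was j steps
    # (j in {1, 2}); two consecutive 1-step jumps are forbidden.
    if n in (1, 2):
        return 1  # only one valid way: a single jump of n steps
    dp = [[0, 0, 0] for _ in range(n + 1)]  # columns 1 and 2 are used
    dp[1][1], dp[1][2] = 1, 0
    dp[2][1], dp[2][2] = 0, 1  # 1+1 would repeat a 1-step jump
    for i in range(3, n + 1):
        dp[i][1] = dp[i - 1][2]               # a 1-jump must follow a 2-jump
        dp[i][2] = dp[i - 2][1] + dp[i - 2][2]  # a 2-jump can follow either
    return dp[n][1] + dp[n][2]
```

For n = 3 this yields 2 (1+2 and 2+1; 1+1+1 is forbidden).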

Special Case: Stair Climbing with Obstacle Generation

  • Problem Description: Climbing to step i places an obstacle on step 2i.
  • Complexity: Future steps depend on all past jumps, demonstrating that some problems inherently defy statelessness, making dynamic programming a less viable solution. Alternative methods like heuristic search or genetic algorithms might be used for such problems.

Classic Examples of DP Problems

  • Fibonacci Sequence: Calculating the nth Fibonacci number.
  • Knapsack Problem: Choosing the most valuable items within a weight limit.
  • Longest Common Subsequence: Finding the longest sequence shared by two strings.
  • Coin Change Problem: Determining the minimum number of coins needed to make a specific amount.
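As one worked instance of these classics, the coin change problem follows the same pattern as the stair-climbing solution above: define a state, an initial state, and a transition. A minimal sketch (the function name and the -1 "unreachable" convention are assumptions, not from the original text):

```python
def coin_change(coins, amount):
    # dp[a]: minimum number of coins needed to make amount a
    INF = float("inf")
    dp = [0] + [INF] * amount  # amount 0 needs zero coins
    for a in range(1, amount + 1):
        for c in coins:
            # Transition: using coin c, dp[a] = dp[a - c] + 1 if better
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1
```

For example, with coins 1, 2, 5 and amount 11, the answer is 3 (5 + 5 + 1).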